6 research outputs found

    Toward an Understanding of Responsible Artificial Intelligence Practices

    Artificial Intelligence (AI) now influences all aspects of human and business activity. Although the potential benefits of AI technologies have been widely discussed in the current literature, there is an urgent need to understand how AI can be designed to operate responsibly and act in a manner that meets stakeholders' expectations and applicable regulations. We seek to fill this gap by exploring the practices of responsible AI and identifying the potential benefits of implementing them. In this study, 10 responsible AI cases were selected from different industries to better understand the use of responsible AI in practice. Four responsible AI practices are identified: governance, ethically designed solutions, risk control, and training and education. Five strategies are also recommended for firms considering the adoption of responsible AI practices.

    Accelerating AI adoption with responsible AI signals and employee engagement mechanisms in health care

    Artificial Intelligence (AI) technology is transforming the healthcare sector, yet the associated ethical implications remain open to debate. This research investigates how signals of AI responsibility affect healthcare practitioners' attitudes toward AI, satisfaction with AI, and AI usage intentions, as well as the underlying mechanisms. Our research outlines autonomy, beneficence, explainability, justice, and non-maleficence as the five key signals of AI responsibility for healthcare practitioners. The findings reveal that these five signals significantly increase healthcare practitioners' engagement, which in turn leads to more favourable attitudes, greater satisfaction, and higher usage intentions with AI technology. Moreover, 'techno-overload', a primary 'techno-stressor', moderates the mediating effect of engagement on the relationship between AI justice and behavioural and attitudinal outcomes. When healthcare practitioners perceive AI technology as adding extra workload, this techno-overload undermines the importance of the justice signal and subsequently affects their attitudes, satisfaction, and usage intentions with AI technology.